Transformer Based Model for Predicting Rapid Impact Compaction Outcomes: A Case Study of Utapao International Airport
Youwai, Sompote, Detcheewa, Sirasak
Rapid Impact Compaction (RIC) is often used in large infrastructure projects such as airports and highways, where the soil must support the weight of structures and pavement (Cheng et al. 2021; Mohammed et al. 2013; Simpson et al. 2008; Spyropoulos et al. 2020; Tarawneh and Matraji 2014; Vukadin 2013). The effectiveness of RIC depends on various factors, such as the fine content of the soil, the compaction sequence, the energy applied, the stiffness of the existing ground, the groundwater characteristics, and the soil drainage. These factors vary across site conditions and must be considered in the design of RIC to optimize the compaction method (Ghanbari and Hamidi 2014; Serridge and Synac 2006; Tarawneh and Matraji 2014). It is therefore recommended to conduct a trial compaction before actual construction. Predicting the engineering properties of ground improved by RIC remains a challenging task for geotechnical engineers.
Sequence to sequence pretraining for a less-resourced Slovenian language
Ulčar, Matej, Robnik-Šikonja, Marko
Large pretrained language models have recently come to dominate natural language processing. As an alternative to the masked language modelling objective introduced in BERT, the T5 model uses a more general training objective, namely sequence-to-sequence transformation, which subsumes masked language modelling but more naturally fits text generation tasks such as machine translation, summarization, question answering, text simplification, dialogue systems, etc. Monolingual T5 models have so far been limited to well-resourced languages, while the massively multilingual mT5 model supports 101 languages. In contrast, we trained two different-sized T5-type sequence-to-sequence models for the morphologically rich Slovene language, which has far fewer resources, and analyzed their behavior on 11 tasks. On classification tasks, the SloT5 models mostly lag behind the monolingual Slovene SloBERTa model, but they are useful for generative tasks.
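The sequence-to-sequence objective the second abstract refers to is T5-style span corruption: contiguous spans of the input are replaced with sentinel tokens, and the decoder target lists each sentinel followed by the text it masked. The following is a minimal, self-contained sketch of that objective; the `span_corrupt` helper is an illustrative assumption, not code from the paper, though the `<extra_id_n>` sentinel naming follows T5 convention.

```python
def span_corrupt(tokens, spans):
    """Build a (source, target) pair for T5-style span-corruption pretraining.

    tokens: list of word tokens
    spans:  sorted, non-overlapping (start, end) index pairs (end exclusive)
            marking the spans to mask
    """
    source, target = [], []
    cursor = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"         # T5-convention sentinel token
        source.extend(tokens[cursor:start])  # unmasked text stays in the source
        source.append(sentinel)              # masked span replaced by sentinel
        target.append(sentinel)              # target: sentinel, then the span
        target.extend(tokens[start:end])
        cursor = end
    source.extend(tokens[cursor:])           # trailing unmasked text
    target.append("</s>")                    # end-of-sequence marker
    return source, target


tokens = "the quick brown fox jumps over the lazy dog".split()
src, tgt = span_corrupt(tokens, [(1, 3), (6, 7)])
# src: ['the', '<extra_id_0>', 'fox', 'jumps', 'over', '<extra_id_1>', 'lazy', 'dog']
# tgt: ['<extra_id_0>', 'quick', 'brown', '<extra_id_1>', 'the', '</s>']
```

Because the masked spans move to the target side, the same encoder-decoder setup also handles translation, summarization, and the other generative tasks mentioned above without changing the training interface.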